
    Boundaries of Semantic Distraction: Dominance and Lexicality Act at Retrieval

    Three experiments investigated memory for semantic information with the goal of determining boundary conditions for the manifestation of semantic auditory distraction. Irrelevant speech disrupted the free recall of semantic category-exemplars to an equal degree regardless of whether the speech coincided with the presentation or test phase of the task (Experiment 1), and the disruption occurred regardless of whether the speech comprised random words or coherent sentences (Experiment 2). The effects of background speech were greater when the irrelevant speech was semantically related to the to-be-remembered material, but only when the irrelevant words were high in output dominance (Experiment 3). The implications of these findings for the processing of task material and the processing of background speech are discussed.

    The Impact of Fillers on Lineup Performance

    Filler siphoning theory posits that the presence of fillers (known innocents) in a lineup protects an innocent suspect from being chosen by siphoning choices away from that innocent suspect. This mechanism has been proposed as an explanation for why simultaneous lineups (viewing all lineup members at once) induce better performance than showups (one-person identification procedures). We implemented filler siphoning in a computational model (WITNESS, Clark, Applied Cognitive Psychology 17:629–654, 2003), explored the impact of the number of fillers (lineup size) and filler quality on simultaneous and sequential lineups (viewing lineup members in sequence), and compared both to showups. In limited situations, we found that filler siphoning can produce a simultaneous lineup performance advantage, but one that is insufficient in magnitude to explain empirical data. However, the magnitude of the empirical simultaneous lineup advantage can be approximated once criterial variability is added to the model. But this modification works by negatively impacting showups rather than promoting more filler siphoning. In sequential lineups, fillers were found to harm performance. Filler siphoning fails to clarify the relationship between simultaneous lineups and sequential lineups or showups. By incorporating constructs like filler siphoning and criterial variability into a computational model, and trying to approximate empirical data, we can sort through explanations of eyewitness decision-making, a prerequisite for policy recommendations. Charges for publication of this article sponsored by University of Oklahoma Libraries Open Access/Subvention Fund.
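    The siphoning mechanism described above can be illustrated with a minimal Monte Carlo sketch. This is not the WITNESS model itself; the match distributions, decision criterion, and all parameter values below are illustrative assumptions. Each simulated lineup member receives a noisy match-to-memory value, and the witness picks the best-matching member only if it exceeds a criterion; fillers "siphon" choices away from the innocent suspect by sometimes out-matching him.

    ```python
    import random

    def lineup_trial(n_fillers, filler_quality, rng):
        """One simulated innocent-suspect lineup (toy stand-in, not WITNESS).
        Each member gets a noisy match value; the witness identifies the
        best member if its match exceeds a fixed decision criterion."""
        criterion = 0.5
        # Assumption: the innocent suspect resembles the culprit somewhat
        # more than the fillers do (mean 0.4 vs. filler_quality).
        suspect = rng.gauss(0.4, 0.3)
        fillers = [rng.gauss(filler_quality, 0.3) for _ in range(n_fillers)]
        best = max([suspect] + fillers)
        if best < criterion:
            return "reject"
        return "suspect" if best == suspect else "filler"

    def innocent_suspect_id_rate(n_fillers, filler_quality=0.2,
                                 trials=20000, seed=1):
        """Proportion of trials on which the innocent suspect is chosen."""
        rng = random.Random(seed)
        picks = sum(lineup_trial(n_fillers, filler_quality, rng) == "suspect"
                    for _ in range(trials))
        return picks / trials

    # n_fillers = 0 is effectively a showup; adding fillers siphons
    # choices away from the innocent suspect.
    for n in (0, 2, 5):
        print(n, round(innocent_suspect_id_rate(n), 3))
    ```

    Even this toy version reproduces the qualitative protective effect: the innocent-suspect identification rate falls as fillers are added.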

    Theoretical vs. empirical discriminability: the application of ROC methods to eyewitness identification

    Receiver operating characteristic (ROC) analysis was introduced to the field of eyewitness identification 5 years ago. Since that time, it has been both influential and controversial, and the debate has raised an issue about measuring discriminability that is rarely considered. The issue concerns the distinction between empirical discriminability (measured by area under the ROC curve) vs. underlying/theoretical discriminability (measured by d′ or variants of it). Under most circumstances, the two measures will agree about a difference between two conditions in terms of discriminability. However, it is possible for them to disagree, and that fact can lead to confusion about which condition actually yields higher discriminability. For example, if the two conditions have implications for real-world practice (e.g., a comparison of competing lineup formats), should a policymaker rely on the area-under-the-curve measure or the theory-based measure? Here, we illustrate the fact that a given empirical ROC yields as many underlying discriminability measures as there are theories that one is willing to take seriously. No matter which theory is correct, for practical purposes, the singular area-under-the-curve measure best identifies the diagnostically superior procedure. For that reason, area under the ROC curve informs policy in a way that underlying theoretical discriminability never can. At the same time, theoretical measures of discriminability are equally important, but for a different reason. Without an adequate theoretical understanding of the relevant task, the field will be in no position to enhance empirical discriminability.
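    The two measures contrasted above are easy to compute side by side under one particular theory. As a minimal sketch, assuming the equal-variance Gaussian signal-detection model (one of many theories the abstract alludes to, under which AUC = Φ(d′/√2)), the hypothetical hit and false-alarm rates below are invented for illustration:

    ```python
    from statistics import NormalDist

    Z = NormalDist().inv_cdf    # probability -> z-score
    PHI = NormalDist().cdf      # z-score -> probability

    def d_prime(hit_rate, fa_rate):
        """Theoretical discriminability under the equal-variance
        Gaussian signal-detection model."""
        return Z(hit_rate) - Z(fa_rate)

    def auc_equal_variance(dp):
        """Area under the ROC curve implied by that same model:
        AUC = Phi(d' / sqrt(2))."""
        return PHI(dp / 2 ** 0.5)

    # Hypothetical hit/false-alarm rates for two lineup procedures.
    dp_a = d_prime(0.80, 0.20)
    dp_b = d_prime(0.70, 0.10)
    print("A:", round(dp_a, 3), round(auc_equal_variance(dp_a), 3))
    print("B:", round(dp_b, 3), round(auc_equal_variance(dp_b), 3))
    ```

    The point of the abstract is that the empirical AUC can instead be estimated model-free from the observed ROC points, whereas d′ only exists relative to a theory; a different theory (e.g., unequal variances) would map the same ROC onto a different underlying discriminability.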

    Response probability and response time: a straight line, the Tagging/Retagging interpretation of short term memory, an operational definition of meaningfulness and short term memory time decay and search time

    The functional relationship between correct response probability and response time is investigated in data sets from Rubin, Hinton and Wenzel (J Exp Psychol Learn Mem Cogn 25:1161–1176, 1999) and Anderson (J Exp Psychol [Hum Learn] 7:326–343, 1981). The two measures are linearly related through stimulus presentation lags from 0 to 594 s in the former experiment and for repeated learning of words in the latter. The Tagging/Retagging interpretation of short term memory is introduced to explain this linear relationship. At stimulus presentation the words are tagged. This tagging level drops slowly with time. When a probe word is reintroduced, the tagging level has to increase for the word to be properly identified, leading to a delay in response time. The tagging time is related to the meaningfulness of the words used—the more meaningful the word, the longer the tagging time. After stimulus presentation the tagging level drops in a logarithmic fashion to 50% after 10 s and to 20% after 240 s. The incorrect recall and recognition times saturate in the Rubin et al. data set (they are not linear for large time lags), suggesting a limited time to search the short term memory structure: the search time for recall of unusual words is 1.7 s. For recognition of nonsense words the corresponding time is about 0.4 s, similar to the 0.243 s found in Cavanagh (1972).
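    The two anchor points quoted above (50% at 10 s, 20% at 240 s) pin down a logarithmic decay curve exactly. As a minimal sketch, assuming the simplest form consistent with the description, level(t) = a − b·ln(t) (the abstract does not state the functional form, so this parameterization is an assumption):

    ```python
    import math

    # Anchor points taken from the abstract: tagging level 0.50 at
    # t = 10 s and 0.20 at t = 240 s.
    t1, y1 = 10.0, 0.50
    t2, y2 = 240.0, 0.20

    # Assumed form: level(t) = a - b * ln(t).  Two points, two unknowns.
    b = (y1 - y2) / (math.log(t2) - math.log(t1))
    a = y1 + b * math.log(t1)

    def tagging_level(t):
        """Interpolated tagging level at lag t seconds (toy model)."""
        return a - b * math.log(t)

    print("a =", round(a, 3), " b =", round(b, 4))
    print("level at 60 s:", round(tagging_level(60), 3))
    ```

    The fitted curve reproduces both anchor points by construction; values outside the 10–240 s range are extrapolations and should not be trusted.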

    The list length effect in recognition memory: an analysis of potential confounds

    The list length effect in recognition memory refers to the finding that recognition performance for a short list is superior to that for a long list. The list length effect is consistent with the predictions of item noise models, but context noise models predict no effect. Recently, it has been argued that if potential confounds are controlled, the list length effect is eliminated. We report the results of two experiments in which we looked at the role of attention and the remember–know task in the detection of the list length effect. We conclude that there is no list length effect when potential confounds are controlled and that it is the design used to control for attention that is most vital.
    Angela Kinnell, Simon Denni
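    The contrasting predictions of the two model classes mentioned above can be made concrete with a toy signal-detection sketch. This is not any published item-noise or context-noise model; the functional forms and parameter values are illustrative assumptions only. The idea is that in an item-noise account every studied item contributes interference, so discriminability falls with list length, whereas in a context-noise account the interference comes from other contexts and is independent of list length.

    ```python
    import math

    def dprime_item_noise(list_length, match=1.0, noise_per_item=0.05):
        """Toy item-noise account: each studied item adds spurious
        familiarity variance, so d' shrinks as the list grows."""
        sigma = math.sqrt(1.0 + noise_per_item * list_length)
        return match / sigma

    def dprime_context_noise(list_length, match=1.0, context_sigma=1.2):
        """Toy context-noise account: interference is fixed by prior
        contexts, so list length plays no role at all."""
        return match / context_sigma

    for n in (16, 64):
        print(n,
              round(dprime_item_noise(n), 3),
              round(dprime_context_noise(n), 3))
    ```

    Under these assumptions the item-noise sketch predicts a list length effect (lower d′ for the 64-item list) and the context-noise sketch predicts none, which is the theoretical contrast the reported experiments were designed to test.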